
    Study on State Predictive Controllers for Networked Control System

    When the components of a closed-loop control system are connected through a common network channel, the resulting system is a Networked Control System (NCS). This spatially distributed architecture offers several advantages, such as reduced system wiring and easier fault detection and maintenance. Unfortunately, the communication channel also introduces network-induced delays and packet dropouts, leading to loss of synchronism in the control loop. These network-induced imperfections can destabilise the system and make it difficult for control engineers to design a controller that compensates for their effect on the closed loop. In addition to complicating the design, the imperfections must be measured and analysed by incorporating them into the closed-loop model. This project investigates network-induced time delays in an NCS by studying their effect on a plant controlled by a Linear Quadratic (LQ) or pole-placement controller, using states obtained from a discrete Kalman filter that estimates the current state in the presence of state and output noise. A control augmentation method then incorporates the network-induced delay into the control vector of the plant model. The delayed control vector complicates controller design; this is resolved by a time-shifting approach. A state predictor is then designed using the plant model's state-transition matrix to predict future states from present and past values of the control vector and the state estimate. Finally, an optimal predictive controller is obtained in which the LQ or pole-placement controller acts on the predicted state from the state predictor, compensating for the network-induced time delay and improving closed-loop performance.
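    As a rough illustration of the state-prediction step, the sketch below rolls a discrete plant model forward through the buffered control inputs, computing x̂(k+d) = A^d x̂(k) + Σ_j A^(d-1-j) B u(k+j) for a delay of d steps. The double-integrator plant, the gain K and the two-step delay are hypothetical values chosen for the example, not taken from the project.

```python
import numpy as np

def predict_state(A, B, x_hat, u_buffer):
    """Roll the plant model forward through the buffered controls:
    x(k+d) = A^d x(k) + sum_j A^(d-1-j) B u(k+j), with d = len(u_buffer)."""
    x = x_hat.copy()
    for u in u_buffer:          # one model step per in-flight control sample
        x = A @ x + B @ u
    return x

# Hypothetical double-integrator plant, discretised with period T
T = 0.1
A = np.array([[1.0, T],
              [0.0, 1.0]])
B = np.array([[0.5 * T**2],
              [T]])
K = np.array([[3.2, 2.1]])      # illustrative state-feedback gain (LQ or pole placement)

x_hat = np.array([[1.0], [0.0]])                   # current Kalman filter estimate
u_buffer = [np.array([[0.0]]), np.array([[0.1]])]  # controls in transit: 2-step delay

x_pred = predict_state(A, B, x_hat, u_buffer)
u_next = -K @ x_pred            # controller acts on the predicted, delay-compensated state
print(x_pred.ravel(), u_next.ravel())
```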

    Privacy preserving data mining

    A fruitful direction for future data mining research is the development of techniques that incorporate privacy concerns. Specifically, we address the following question: since the primary task in data mining is the development of models over aggregated data, can we develop accurate models without access to the precise information in individual data records? We analyse the possibility of privacy in data mining in two phases: randomization and reconstruction.

    Data mining services require accurate input data for their results to be meaningful, but privacy concerns may lead users to provide spurious information. To preserve client privacy in the data mining process, techniques based on random perturbation of data records are used. Suppose there are many clients, each holding some personal information, and one server that is interested only in aggregate, statistically significant properties of this information. The clients can protect the privacy of their data by perturbing it with a randomization algorithm and submitting only the randomized version. This approach is called randomization. The randomization algorithm is chosen so that aggregate properties of the data can be recovered with sufficient precision, while individual entries are significantly distorted. For value distortion to be a useful privacy mechanism, we must be able to reconstruct the original data distribution so that data mining techniques can still yield the required statistics.

    Analysis. Let x_i be the original data instance at client i. We introduce a random shift y_i using the randomization technique explained below. The server runs the reconstruction algorithm (also explained below) on the perturbed value z_i = x_i + y_i to obtain an approximation of the original data distribution suitable for data mining applications.

    Randomization. We used the following randomizing operator for data perturbation: given x, let R(x) = (x + ε) mod 1001, where ε is chosen uniformly at random from {-100, ..., 100}.

    Reconstruction of a discrete data set. With the distributions f_X, f_Y and f_Z of X, Y and Z = X + Y given, Bayes' rule yields

    P(X = x | Z = z) = P(X = x, Z = z) / f_Z(z)
                     = P(X = x, Y = z - x) / f_Z(z)
                     = f_X(x) · f_Y(z - x) / f_Z(z),   since Y is independent of X.

    Results. In this project we implemented both aspects of privacy-preserving data mining: the first phase perturbs the original data set using the randomization operator, and the second phase reconstructs the randomized data set using the proposed algorithm to obtain an approximation of the original data set. Performance metrics such as percentage deviation, accuracy and privacy breaches were calculated. We studied the technical feasibility of realizing privacy-preserving data mining; the basic premise was that the sensitive values in a user's record are perturbed with a randomizing function and an approximation of the original data set is then recovered with a reconstruction algorithm.
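    A minimal sketch of the two phases, assuming the randomizing operator given above; the abstract does not fix the exact reconstruction iteration, so the EM-style refinement from a uniform prior below follows the standard randomization literature and is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
SUPPORT = 1001                     # values live in {0, ..., 1000}

def randomize(x):
    """R(x) = (x + eps) mod 1001 with eps uniform in {-100, ..., 100},
    the randomizing operator given in the abstract."""
    eps = rng.integers(-100, 101, size=x.shape)
    return (x + eps) % SUPPORT

def reconstruct(z, f_y, iters=20):
    """Estimate the distribution of X from the perturbed samples z via
    f(X=x | Z=z) = f_X(x) f_Y(z - x) / f_Z(z), iterated from a uniform
    prior (EM-style refinement; an assumption, not the abstract's exact scheme)."""
    f_x = np.full(SUPPORT, 1.0 / SUPPORT)
    xs = np.arange(SUPPORT)
    for _ in range(iters):
        acc = np.zeros(SUPPORT)
        for zi in z:
            w = f_x * f_y[(zi - xs) % SUPPORT]   # f_X(x) * f_Y(z - x)
            acc += w / w.sum()                   # normalised posterior for this sample
        f_x = acc / len(z)                       # updated distribution estimate
    return f_x

# Distribution of the random shift Y: uniform over -100..100 (mod 1001)
f_y = np.zeros(SUPPORT)
f_y[np.arange(-100, 101) % SUPPORT] = 1.0 / 201

x = rng.integers(400, 600, size=1000)            # hypothetical original records
f_x_hat = reconstruct(randomize(x), f_y)
print("mass recovered on true range:", f_x_hat[400:600].sum())
```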

    Autocamera Calibration for traffic surveillance cameras with wide angle lenses

    We propose a method for automatic calibration of a traffic surveillance camera with a wide-angle lens. A few minutes of video footage is sufficient for the entire calibration process. The method takes the height of the camera above the ground plane as the only user input, which resolves the scale ambiguity. The calibration is performed in two stages: (1) intrinsic calibration and (2) extrinsic calibration. Intrinsic calibration is achieved by assuming an equidistant fisheye distortion and an otherwise ideal camera model. Extrinsic calibration is accomplished by estimating the two vanishing points on the ground plane from the motion of vehicles at perpendicular intersections. The intrinsic calibration stage is also valid for thermal cameras. Experiments demonstrate the effectiveness of this approach on visible as well as thermal cameras. Index Terms: fish-eye, calibration, thermal camera, intelligent transportation systems, vanishing point
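    A minimal sketch of the equidistant fisheye assumption used in the intrinsic stage: it maps distorted pixels to ideal pinhole pixels by inverting r_d = f·θ and reprojecting with r_u = f·tan(θ). The focal length and principal point here are hypothetical placeholders, not values estimated by the paper's method.

```python
import numpy as np

def undistort_equidistant(pts, f, cx, cy):
    """Map pixels from an equidistant fisheye image (r_d = f * theta) to an
    ideal pinhole image (r_u = f * tan(theta)).  f and (cx, cy) are the
    intrinsics the paper estimates; here they are hypothetical inputs."""
    d = pts - np.array([cx, cy])            # coordinates relative to principal point
    r_d = np.linalg.norm(d, axis=1)         # distorted radius
    theta = r_d / f                         # invert the equidistant model
    scale = np.where(r_d > 0, f * np.tan(theta) / np.maximum(r_d, 1e-9), 1.0)
    return d * scale[:, None] + np.array([cx, cy])

# Hypothetical 1280x720 camera with f = 400 px
pts = np.array([[900.0, 500.0], [640.0, 360.0]])
print(undistort_equidistant(pts, f=400.0, cx=640.0, cy=360.0))
```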

    Energy-Distortion Tradeoff with Multiple Sources and Feedback

    The energy-distortion tradeoff for lossy transmission of sources over multi-user networks is studied. The energy-distortion function E(D) is defined as the minimum energy required to transmit a source to the receiver within a target distortion D, when there is no restriction on the number of channel uses per source sample. For point-to-point channels, E(D) is shown to equal the product of the minimum energy per bit E_b,min and the rate-distortion function R(D), indicating the optimality of source-channel separation in this setting. It is shown that the optimal E(D) can also be achieved by the Schalkwijk-Kailath (SK) scheme, as well as by separate coding, in the presence of perfect channel output feedback. It is then shown that the optimality of separation in terms of E(D) does not extend to multi-user networks. The scenario studied has two encoders observing correlated Gaussian sources and communicating with the receiver over a Gaussian multiple-access channel (MAC) with perfect channel output feedback. First, a lower bound on E(D) is provided and compared against two upper bounds achievable by separation and by an uncoded SK-type scheme, respectively. Even though neither achievable scheme meets the lower bound in general, their energy requirements are shown to lie within a constant gap of E(D) in the low-distortion regime, in which the energy requirement grows unbounded. The SK-based scheme is shown to outperform the separation-based scheme in certain scenarios, which establishes the sub-optimality of separation in this multi-user setting.
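    For the point-to-point case, the separation result above can be written compactly, and specializing it to a Gaussian source over an AWGN channel gives a closed form. The specialization is a standard textbook exercise, not a result quoted from this paper:

```latex
E(D) = E_{b,\min}\, R(D).
% For a Gaussian source with variance \sigma^2 over an AWGN channel with
% noise spectral density N_0/2, one has E_{b,\min} = N_0 \ln 2 and
% R(D) = \tfrac{1}{2}\log_2(\sigma^2/D), hence
E(D) = N_0 \ln 2 \cdot \tfrac{1}{2}\log_2\frac{\sigma^2}{D}
     = \frac{N_0}{2}\,\ln\frac{\sigma^2}{D}.
```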

    Foreign Direct Investment in India modelled on Globerman & Shapiro Model

    India is one of the emerging giants of the world, growing at almost 9% per year. This paper begins with a historical view of the government's policy towards investment in general and foreign investment in particular. Its main objective, however, is to test the applicability of the Globerman & Shapiro FDI model in India. We further give an account of the impact of the model's principal factors on FDI in India and in various other countries.

    Rumour Source Detection Using Game Theory

    Social networks have become a critical part of our lives, enabling us to interact with many people. These networks have become the main sources for creating, sharing and extracting information on various subjects. But not all of this information is true: it may contain many unverified rumours with the potential to spread incorrect information to the masses, which may even lead to widespread panic. It is therefore of great importance to identify the nodes and edges that play a crucial role in a network, in order to find the most influential sources of rumour spreading. The basic idea is to rank the nodes and edges of a network by criticality. Most existing work relies on simple centrality measures, which capture only the individual contribution of a node to the network. Game-theoretic approaches based on the Shapley Value (SV) instead measure a player's individual marginal contribution as the weighted average marginal increase in the yield of any coalition that the player might join. In our experiment, we played five SV-based games to find the top 10 most influential nodes on three network datasets (Enron, USAir97 and Les Misérables), and compared the results with those obtained from primitive centrality measures. Our results show that the SV-based approach better captures the marginal contribution, and therefore the actual influence, of each node on the entire network.
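    As an illustration of SV-based network centrality, the sketch below uses one game from the game-theoretic centrality literature, in which a coalition's value is the number of nodes it reaches within one hop; its Shapley value has the closed form SV(v) = Σ_{u ∈ N(v)∪{v}} 1/(1 + deg(u)). The abstract does not specify its five games, so this particular game is an assumption.

```python
import networkx as nx

def shapley_value_game1(G):
    """Exact Shapley values for the coalition game whose value is the
    number of nodes a coalition reaches within one hop:
        SV(v) = sum over u in N(v) ∪ {v} of 1 / (1 + deg(u)).
    One standard SV game from the literature, assumed for illustration."""
    return {
        v: sum(1.0 / (1 + G.degree(u)) for u in list(G.neighbors(v)) + [v])
        for v in G.nodes
    }

# The Les Misérables co-appearance network ships with networkx
G = nx.les_miserables_graph()
sv = shapley_value_game1(G)
print("Top 10 by Shapley value:", sorted(sv, key=sv.get, reverse=True)[:10])

# Compare against a primitive centrality measure
deg = nx.degree_centrality(G)
print("Top 10 by degree:", sorted(deg, key=deg.get, reverse=True)[:10])
```

    The two rankings largely overlap on hub nodes, but the SV ranking rewards nodes whose neighbours are otherwise poorly covered, which is the marginal-contribution effect the abstract describes.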